Vladimir Itskov: Relating structure to function of recurrent networks
Talk by Vladimir Itskov of the University of Nebraska-Lincoln. Given to the Redwood Center for Theoretical Neuroscience at UC Berkeley.
There are two complementary perspectives on what controls neural activity in sensory systems: receptive fields and neural network dynamics. Both the structure of the underlying neural network and the structure of the represented stimuli impose constraints on which patterns of activity (the neural code) are possible. It is therefore natural to ask whether one can directly relate the structure of a neuronal network to the stimuli it represents. A related question is how to design a network that realizes a prescribed neural code.
We address these questions in the context of a simple model, where the function of the recurrent network is to gate inputs so that only a selected set of persistent activity patterns is allowed. In this framework, one can analytically determine the set of all stable steady states (the "neural code") from the synaptic matrix alone, and these activity patterns are highly constrained even when the allowed inputs are not. If the allowed activity patterns are consistent with overlapping receptive fields, one can infer topological features of the underlying stimulus space. It turns out that one can also design networks whose neural codes reflect such topological features. We end with some progress on the more general problem of how to design a network with a prescribed neural code.
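The claim that the stable steady states can be read off from the synaptic matrix alone can be illustrated with a small sketch. Assuming the model in question is a threshold-linear network, dx/dt = -x + [Wx + b]_+, a subset of neurons can be coactive at a stable fixed point when the corresponding principal submatrix of (-I + W) is stable (all eigenvalues have negative real part). The function name `permitted_sets` and the specific example matrix below are illustrative choices, not taken from the talk:

```python
import itertools
import numpy as np

def permitted_sets(W):
    """Enumerate subsets of neurons that can be coactive at a stable
    steady state of the threshold-linear network dx/dt = -x + [Wx + b]_+.

    A subset sigma qualifies when the principal submatrix (-I + W)_sigma
    has all eigenvalues with negative real part, so that activity
    restricted to sigma is linearly stable.
    """
    n = W.shape[0]
    A = -np.eye(n) + W  # effective linear dynamics above threshold
    sets = []
    for k in range(1, n + 1):
        for sigma in itertools.combinations(range(n), k):
            sub = A[np.ix_(sigma, sigma)]
            if np.max(np.linalg.eigvals(sub).real) < 0:
                sets.append(sigma)
    return sets

# A 3-neuron network with strong mutual inhibition: only single
# neurons can be stably active (winner-take-all), so the "neural code"
# consists of the three singleton patterns.
W = np.array([[0., -2., -2.],
              [-2., 0., -2.],
              [-2., -2., 0.]])
print(permitted_sets(W))  # → [(0,), (1,), (2,)]
```

Note how the allowed activity patterns are fully determined by W, even though no particular input b was specified, which is the sense in which the network "gates" its inputs.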